Flying back to China soon. Before leaving, here is a simple blog post testing TensorFlow 2.0. In this post, I strictly follow Amita Kapoor and Ajit Jaokar’s free book Getting Started with TensorFlow 2.0.
For simplicity, let’s try it out directly:
Finally, I’m going to talk about the Raspberry Pi a bit. It has been discussed so widely that there is NOTHING special left for me to add.
Writing about the Raspberry Pi is my way to say GOOD BYE to my Raspberry Pi 3B, and WELCOME to the Raspberry Pi 4 at the same time. Our target today is to build an AI edge computing device, as in the following video:
Before everything starts, let’s carry out a simple comparison between the Raspberry Pi 3B and Raspberry Pi 3B+:
Raspberry Pi 3B vs. Raspberry Pi 3B+
Then, we just follow these 2 blogs, Run NCS Applications on Raspberry Pi and Adding AI to the Raspberry Pi with the Movidius Neural Compute Stick, to utilize these 2 outdated products:
| Raspberry Pi 3B | Intel Movidius Neural Compute Stick 1 |
|---|---|
| (photo) | (photo) |
Intel Movidius Neural Compute Stick 1 is NOT listed on Intel’s official website anymore, but GitHub support for it can still be found at https://github.com/movidius/ncsdk.
I met a bug for which I never managed to find a solution. Three related forum issues are enumerated as follows, and you are welcome to carry out further investigation.
Anyway, ip -c address gives out the following ERROR message:
```shell
➜ ~ ip -c address
```
An arbitrary 802.11n 2.4 GHz WiFi USB dongle is able to activate WiFi for the Raspberry Pi 3B in my case. BTW, mine is an EDUP EP-N8508GS.
```shell
➜ ~ ip -c address
```
Bringing up WiFi is just a matter of plugging the WiFi USB dongle out and back in.
We FIRST need to have ncsdk installed. Yup, here, as described in Run NCS Applications on Raspberry Pi, we carry out the installation directly under folder ...../ncsdk/api/src.
```shell
➜ src git:(ncsdk2) ✗ make -j4
```
Prerequisite:
For simplicity, you can grab the above three packages directly from Raspbian’s repository. However, it seems the default boost from Raspbian’s repository ONLY supports python2, NOT python3. Therefore, in my case, I built boost 1.71, flann 1.9.1 and OpenCV 4.1.1 from source.
The reason we need to cross-compile flann is the limited memory of the Raspberry Pi 3B.
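This is easy to verify on the board itself: `free -h` (standard Linux, nothing board-specific) shows the 3B’s roughly 1 GB of RAM, which simply isn’t enough for heavyweight C++ template builds like flann:

```shell
# Report memory in human-readable units; on a Raspberry Pi 3B the "Mem" row
# totals roughly 1 GB, which motivates cross-compiling flann on the host PC.
free -h
```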
In my case, I’m using crosstool-NG and following the blog RPi Linaro GCC Compilation to carry out my cross-compiling. Details about cross-compiling can be found in my next blog, Build Toolchain Using Crosstool-NG.
```shell
➜ ~ cat /proc/cpuinfo
```
This issue seems to be an OLD topic. More details about the development history of Raspberry Pi can be found on Wikipedia.
Then, we build Caffe 1 from source.
Now, let’s test the installation of NCS with the example hello_ncs_py from ncappzoo.
```shell
➜ hello_ncs_py git:(ncsdk2) python hello_ncs.py
```
Today, we’re going to build our own toolchain using crosstool-NG. There are many cases in which we want to build a particular toolchain for a specific embedded development board. One MOST important reason is probably that the board has too few resources to build on, so building some software directly on the board is VERY time-consuming. Conversely, cross-compiling on the host PC is MUCH more efficient in terms of time consumption.
In this blog, for simplicity, we take Raspberry Pi 3B as our demo development board, for which we are building the cross compiler. Some references can be found at:
How to install crosstool-NG is thoroughly summarized on its official website. In my case, I had it installed under the folder /opt/compilers/crosstool-ng. Let’s take a look:
```shell
longervision-GT72-6QE% pwd
```
And make sure /opt/compilers/crosstool-ng/bin is under environment variable PATH.
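To make that concrete, here is the one-liner I’d use to extend PATH for the current session (the directory is the install prefix from above; put the export into your shell rc file to make it permanent):

```shell
# Prepend crosstool-NG's bin directory (install prefix from above) to PATH.
export PATH="/opt/compilers/crosstool-ng/bin:$PATH"

# Sanity check: the directory should now appear in PATH.
echo "$PATH" | grep -q '/opt/compilers/crosstool-ng/bin' && echo "PATH OK"
```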
From any directory in which you want to save your .config file, we can configure our target cross compiler.
```shell
longervision-GT72-6QE% ct-ng menuconfig
```

According to elinux: RPi Linaro GCC Compilation, we need to do the following selections:
Paths and misc options:

Target options:

- append hf to the tuple (EXPERIMENTAL): ticked

Toolchain options:

Operating System:

```shell
➜ ~ uname -a
```

Binary utilities:

```shell
➜ ~ apt show binutils
```

C-library:

```shell
➜ ~ apt show libc-bin
```

C compiler:

```shell
➜ ~ apt show gcc
```
After we save the configuration to file .config, we Exit the crosstool-NG configuration dialog.
Please remember to:
- unset LD_LIBRARY_PATH before building. Otherwise, you’ll meet some ERROR messages.
- mkdir ~/src before building. Otherwise, whenever you re-run ct-ng build, you’ll have to download ALL required packages from scratch.

```shell
longervision-GT72-6QE% ct-ng build
```
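Putting the two reminders together with the build command, the session looks roughly like this (a sketch from my setup; ct-ng is assumed to already be on PATH):

```shell
# Pre-build checklist for crosstool-NG:
unset LD_LIBRARY_PATH    # a set LD_LIBRARY_PATH causes hard-to-read build errors
mkdir -p ~/src           # tarball cache: re-running `ct-ng build` reuses downloads

# Kick off the (multi-hour) toolchain build if ct-ng is available:
command -v ct-ng >/dev/null && ct-ng build || echo "ct-ng not found on PATH"
```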
This process may take a while. I’m going to sleep tonight. Continue tomorrow…
Alright, let’s continue today. Glad to know it’s built successfully.
Let’s FIRST take a look at what’s under the current folder.
```shell
longervision-GT72-6QE% ls
```
And then, let’s take a look at what’s built under the specified destination folder.
```shell
longervision-GT72-6QE% ls ~/....../CrossCompile/RPi
```
Finally, let’s take a look at the version of our built cross compilers for Raspberry Pi 3B.
```shell
longervision-GT72-6QE% arm-rpi-linux-gnueabihf-gcc --version
```
Additional issue: it seems the current crosstool-NG does NOT officially support Python. Please refer to my issue filed against crosstool-NG.
For most packages nowadays, the build procedure is one of:

- ./configure -> make -> make install
- mkdir build -> cd build -> ccmake ../ -> make -> make install

So, how do we use our generated toolchain to compile/build our target packages?
Today, we’re going to take the package flann as an example, which is built with CMake.
By following CMake Toolchains - Cross Compiling for Linux, we FIRST modify flann CMakeLists.txt a bit by adding the following lines before project(flann).
```cmake
cmake_minimum_required(VERSION 2.6)
```
The cross compilers are expected under ${tools}/bin.

In addition, for the emulated Raspberry Pi 3B sysroot, hdf5 is NOT supported. Therefore, let’s simply comment out the following line in flann’s CMakeLists.txt.
```cmake
#find_hdf5()
```
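Alternatively, instead of editing CMakeLists.txt in place, the same cross-compile variables can live in a standalone toolchain file passed via -DCMAKE_TOOLCHAIN_FILE. A minimal sketch, assuming the hypothetical install prefix /opt/compilers/CrossCompile/RPi for the crosstool-NG output:

```shell
# Write a minimal CMake toolchain file (illustrative; the tool prefix below
# is a hypothetical path -- substitute your own crosstool-NG output directory).
cat > /tmp/rpi-toolchain.cmake <<'EOF'
set(CMAKE_SYSTEM_NAME Linux)
set(CMAKE_SYSTEM_PROCESSOR arm)
set(tools /opt/compilers/CrossCompile/RPi)
set(CMAKE_C_COMPILER   ${tools}/bin/arm-rpi-linux-gnueabihf-gcc)
set(CMAKE_CXX_COMPILER ${tools}/bin/arm-rpi-linux-gnueabihf-g++)
EOF

# It would then be passed to cmake as:
#   cmake -DCMAKE_TOOLCHAIN_FILE=/tmp/rpi-toolchain.cmake ..
grep CMAKE_SYSTEM_NAME /tmp/rpi-toolchain.cmake
```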
Now, let’s start cross-compiling flann.
```shell
longervision-GT72-6QE% mkdir build
```
Press c and then t, and you’ll see the cross-compiling toolchain has been automatically configured as:
```
BUILD_CUDA_LIB *OFF
```
The LAST step before make is to modify some of the parameters accordingly, in my case:
- /usr/local/ -> the directory under which you want to install; this can be IGNORED for now.

Now, press c, and you’ll see:
```
CMake Warning at CMakeLists.txt:99 (message):
```
A warning about missing hdf5 is clearly reasonable and acceptable. Then press g.
Finally, it’s the time for us to cross build flann.
```shell
longervision-GT72-6QE% make -j8
```
VERY Important:
Finally, you’ll see flann has been successfully cross compiled as:
```
....../flann/src/cpp/flann/algorithms/autotuned_index.h: In member function 'void flann::AutotunedIndex<Distance>::optimizeKMeans(std::vector<flann::AutotunedIndex<Distance>::CostData>&) [with Distance = flann::L2<double>]':
```
You can now:
- make install to install the built/generated libraries under CMAKE_INSTALL_PREFIX

BTW: do NOT forget to install the header files.
Multilib/multiarch seems to be problematic nowadays. Please pay attention to Multilib/multiarch. Some related issues are enumerated at the end of this blog.
Alright, that’s all for today. Let me go to bed. Good bye…
Today, we are going to talk about a fabulous project: stereo vision on a zynq-7010 board.
We are using a VCSBC nano Z-RH-2 board for today’s experiment. The adopted board looks like the following:
| Front | Back | Connector |
|---|---|---|
| (photo) | (photo) | (photo) |
For more detailed specifications, please refer to Vision Components’ official website.
After you set up a static IP for this Vision Components SBC, it’s pretty straightforward to ssh into the system.
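For reference, on Debian-style images a static address is usually just a stanza like the following in /etc/network/interfaces (the addresses are examples matching my LAN; consult VCLinux_Getting_Started.pdf for the board’s actual network configuration mechanism):

```
auto eth0
iface eth0 inet static
    address 192.168.1.79
    netmask 255.255.255.0
    gateway 192.168.1.1
```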
```shell
longervision-GT72-6QE% ssh user@192.168.1.79
```
Currently, VC provides Linux Kernel 3.14.79.
```shell
user@VC-Z:~$ uname -a
```
And, let’s take a look at the dual ARMv7 CPUs on zynq-7010.
```shell
user@VC-Z:~$ cat /proc/cpuinfo
```
Sorry everybody. Today, I ONLY test stereo vision on the ARM cores. I’ll try to figure out how to flash an open-source stereo vision IP onto the zynq-7010, or write my own, ASAP.
Hmmmmmmmm… Is it better that I keep my code in the dark???
Let’s try out the stereo vision on some .pgm image pairs FIRST.
```shell
user@VC-Z:~/longervision$ ./pgmpair ../images/aloe_left.pgm ../images/aloe_right.pgm
```
My GOD… It’s UNBELIEVABLY SLOW.
| aloe_left | aloe_right |
|---|---|
| (image) | (image) |

| aloe_left Stereo | aloe_right Stereo |
|---|---|
| (image) | (image) |
The BEST demo code to test Vision Components Stereo Vision is Eclipse_Example_Projects_VC_Z.
```shell
#./imageCaptureTest
```
```shell
03:41:47[root@VC-Z] /home/user/vc/Eclipse_Example_Projects_VC_Z/imageCaptFPS
```
The above 2 examples run directly on Vision Components’ board without a display, since the provided cable has a VGA connector, which has ALREADY been outdated for many years. Therefore, in order to show the captured image pairs, in the next section we’ll have to stream the captured data to a host computer and display the real-time video pairs there.
According to Vision Components’ official documentation VCLinux_Getting_Started.pdf, images captured from the camera can be displayed on a remote host PC.
Eclipse_Example_Projects_VC_Z.zip provides some source code, which displays images captured from the camera in Eclipse on the host PC, by adding the camera as a Remote System in Eclipse.
Anyway, by using this method, you need to prepare ALL the following software and packages in advance.
Besides the above method, a much more straightforward method is to adopt the python script vcimgnetclient.py provided by Vision Components. However, vcimgnetclient.py is ONLY python2 compatible, and Vision Components has NO plan to provide a python3 compatible version of it.
Therefore, the KEY to using the 2nd method is to make vcimgnetclient.py python3 compatible.
```shell
longervision-GT72-6QE% 2to3 vcimgnetclient.py
```
Please refer to General Porting Tips.
```shell
longervision-GT72-6QE% ./pygi-convert.sh vcimgnetclient.py
```
```shell
longervision-GT72-6QE% python vcimgnetclient.py
```
For ALL vbox.pack_start calls, add one parameter 0 at the end in each case. For instance, change:

```python
vbox.pack_start(self.menu_bar, False, False)
```

to

```python
vbox.pack_start(self.menu_bar, False, False, 0)
```
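Editing every pack_start call by hand gets tedious; a rough sed sketch can do the bulk of the work (demonstrated on a scratch file here, since it assumes each call fits on one line and the result should still be reviewed by eye):

```shell
# Demonstrate on a scratch file (vcimgnetclient.py itself is not reproduced here).
printf 'vbox.pack_start(self.menu_bar, False, False)\n' > /tmp/pack_demo.py

# Append the new fourth argument "0" before the closing parenthesis.
sed -i 's/\(pack_start(.*False\))/\1, 0)/' /tmp/pack_demo.py

cat /tmp/pack_demo.py
# -> vbox.pack_start(self.menu_bar, False, False, 0)
```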
Oh my god … There are STILL SO MANY things to do in order to make vcimgnetclient.py Python3 compatible. Therefore, I implemented my own vcimgnetclient_qt.py based on PyQt5.
```shell
22:42:22[root@VC-Z] /home/user
```
Sorry, I’m NOT going to show my code, but the performance can be demonstrated as follows:

Oh-my-god, a kaggle competition about diabetes… Hmmm, I have something to do these days… It’s also a GOOD chance for me to learn some medical terms… In this blog, we’re going to try out this kaggle competition - APTOS 2019 Blindness Detection.
It’s NOT hard for us to locate APTOS [UpdatedV14] Preprocessing- Ben’s & Cropping, which seems to me a professional snippet of code for preprocessing diabetic retinopathy images. Let’s just try it out directly.
The implementation is cited directly from APTOS [UpdatedV14] Preprocessing- Ben’s & Cropping, with trivial modifications. You are welcome to download my snippet of code here.